Technical Sessions

Session 10

Edge Computing

Conference: 10:10 AM — 11:20 AM JST
Local: Jun 26 Sat, 9:10 PM — 10:20 PM EDT

Neuron Manifold Distillation for Edge Deep Learning

Zeyi Tao (William and Mary, USA); Qi Xia (The College of William and Mary, USA); Qun Li (College of William and Mary, USA)

Although deep convolutional neural networks (CNNs) show extraordinary power in various object detection tasks, they are infeasible to deploy on resource-constrained devices or embedded systems due to their high computational cost. Efforts such as model compression have been applied, but at the expense of accuracy. A recent approach, knowledge distillation (KD), is based on a student-teacher paradigm that transfers model knowledge from a well-trained model (teacher) to a smaller and faster model (student), which can significantly reduce computational cost and memory usage and prolong battery lifetime. However, while improving model deployability, conventional KD methods weaken model generalization and introduce accuracy losses. In this work, we propose a novel approach, neuron manifold distillation (NMD), in which the student model not only imitates the teacher's output activations but also learns the geometric structure of the teacher's features. As a result, we obtain a high-quality, compact, and lightweight student model. We conduct comprehensive experiments with different distillation configurations over multiple datasets, and the proposed method delivers a consistent improvement in accuracy-speed trade-offs for the distilled model.
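
To make the idea concrete, below is a minimal, hypothetical sketch of a distillation loss in the spirit of NMD: the student matches the teacher's softened outputs and also the pairwise-distance structure of the teacher's features. The abstract does not give the actual NMD formulation, so the geometry term (pairwise feature distances) and the weighting are assumptions for illustration.

```python
import numpy as np

def softmax(z, t=1.0):
    z = z / t
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def pairwise_dist(feats):
    # feats: (batch, dim) -> (batch, batch) Euclidean distance matrix
    sq = (feats ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * feats @ feats.T
    return np.sqrt(np.maximum(d2, 0.0))

def distill_loss(student_logits, teacher_logits,
                 student_feats, teacher_feats,
                 temperature=4.0, alpha=0.5):
    # Output-imitation term: KL(teacher || student) on softened logits.
    p_t = softmax(teacher_logits, temperature)
    p_s = softmax(student_logits, temperature)
    kd = np.mean(np.sum(p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12)), axis=1))
    # Geometry term (assumed): match normalized pairwise feature distances.
    d_t = pairwise_dist(teacher_feats) 
    d_s = pairwise_dist(student_feats)
    d_t /= d_t.mean() + 1e-12
    d_s /= d_s.mean() + 1e-12
    geo = np.mean((d_t - d_s) ** 2)
    return alpha * kd + (1.0 - alpha) * geo
```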

Computation Offloading Scheduling for Deep Neural Network Inference in Mobile Computing

Yubin Duan and Jie Wu (Temple University, USA)

The quality of service (QoS) of intelligent applications on mobile devices heavily depends on the inference speed of Deep Neural Network (DNN) models. Cooperative DNN inference has become an efficient way to reduce inference latency. In cooperative inference, a mobile device offloads part of its inference task to cloud servers. The large communication volume is usually the bottleneck of such systems. Prior research focuses on reducing the communication volume by finding optimal partition points. We notice that the computation and communication resources on mobile devices can work in a pipeline, which can hide communication time behind computation and further reduce inference latency. Based on this observation, we formulate the offloading pipeline scheduling problem: we aim to find the optimal sequence of DNN execution and offloading for mobile devices such that the inference latency is minimized. If we model a DNN as a directed acyclic graph (DAG), the complex precedence constraints in DAGs make the problem challenging. Noticing that most DNN models have independent paths or tree structures, we present an optimal path-wise DAG scheduler and an optimal layer-wise scheduler for tree-structured DAGs. We then propose a heuristic based on topological sort to schedule general-structure DAGs. A prototype of our offloading scheme is implemented on a real-world testbed, where we use a Raspberry Pi as the mobile device and lab PCs as the cloud. Various DNN models are tested, and our scheme reduces their inference latencies in different network environments.
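
As an illustration of hiding communication behind computation, the sketch below treats a DNN whose DAG splits into independent paths as a two-stage (compute-then-upload) pipeline and orders the paths with Johnson's rule to minimize the makespan. Whether this matches the paper's path-wise scheduler exactly is an assumption, and the path names and timings are made up.

```python
def johnson_order(paths):
    """paths: list of (name, compute_time, upload_time)."""
    first = sorted([p for p in paths if p[1] <= p[2]], key=lambda p: p[1])
    last = sorted([p for p in paths if p[1] > p[2]], key=lambda p: -p[2])
    return first + last

def makespan(order):
    t_compute = t_upload = 0.0
    for _, c, u in order:
        t_compute += c                            # device computes paths back to back
        t_upload = max(t_upload, t_compute) + u   # upload starts once the result is ready
    return t_upload

# Hypothetical branches of a multi-path DNN with (compute, upload) times in ms.
paths = [("branch_a", 12.0, 30.0), ("branch_b", 25.0, 10.0), ("branch_c", 8.0, 8.0)]
order = johnson_order(paths)
print([p[0] for p in order], makespan(order))
```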

Drag-JDEC: A Deep Reinforcement Learning and Graph Neural Network-based Job Dispatching Model in Edge Computing

Zhaoyang Yu, Wenwen Liu, Xiaoguang Liu and Gang Wang (Nankai University, China)

The emergence of edge computing eases the latency pressure on remote clouds and the computing pressure on terminal devices, providing new solutions for real-time applications. Jobs from end devices are offloaded to a server in the cloud or an edge cluster for execution. Unreasonable job dispatching strategies not only prolong task completion times, violating users' QoS, but also reduce server resource utilization, increasing the operating costs of service providers. In this paper, we propose an online job dispatching model named Drag-JDEC based on deep reinforcement learning and graph neural networks. For jobs that naturally form directed acyclic graphs, we use a graph attention network to aggregate the features of neighboring nodes and transform them into high-dimensional representations. Combined with the current status of edge servers, the deep reinforcement learning module makes a dispatching decision for each task in the job to keep the load balanced and meet users' QoS. Experiments using real job datasets show that Drag-JDEC outperforms traditional algorithms in balancing the workload of edge servers and adapts well to various edge server configurations, reaching a maximum improvement of 34.43%.

Joint D2D Collaboration and Task Offloading for Edge Computing: A Mean Field Graph Approach

Xiong Wang (The Chinese University of Hong Kong, Hong Kong); Jiancheng Ye (Huawei, Hong Kong); John C.S. Lui (The Chinese University of Hong Kong, Hong Kong)

Mobile edge computing (MEC) facilitates computation offloading to edge servers, as well as task processing via device-to-device (D2D) collaboration. Existing works mainly focus on centralized network-assisted offloading solutions, which do not scale to scenarios involving collaboration among massive numbers of users. In this paper, we propose a joint framework of decentralized D2D collaboration and efficient task offloading for a large-population MEC system. Specifically, we utilize the power of two choices for D2D collaboration, which enables users to beneficially assist each other in a decentralized manner. Due to short-range D2D communication and user movements, we formulate a mean field model on a finite-degree, dynamic graph to analyze the state evolution of D2D collaboration. We derive the existence, uniqueness, and convergence of the stationary state, which yields a tractable characterization of collaboration performance. Complementing this D2D collaboration, we further build a Stackelberg game to model users' task offloading, where the edge server is the leader that determines a service price, while users are followers who make offloading decisions. By embedding the Stackelberg game into Lyapunov optimization, we develop an online offloading and pricing scheme that simultaneously optimizes the server's service utility and users' system cost. Extensive evaluations show that our D2D collaboration can mitigate users' workloads by 73.8% and that task offloading achieves high energy efficiency.
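
A minimal sketch of the power-of-two-choices rule referenced in the abstract: a loaded user samples two neighbors within D2D range and hands a task to the less-loaded one. The range, load model, and assist condition are illustrative assumptions rather than the paper's exact formulation.

```python
import random

def dist(p, q):
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def d2d_assist(user, users, positions, loads, d2d_range=30.0):
    """Try to offload one task from `user` to the better of two random neighbors."""
    neighbors = [v for v in users
                 if v != user and dist(positions[user], positions[v]) <= d2d_range]
    if len(neighbors) < 2:
        return None
    a, b = random.sample(neighbors, 2)
    helper = a if loads[a] <= loads[b] else b     # the "better of two choices"
    if loads[helper] < loads[user]:               # only assist if it actually helps
        loads[helper] += 1
        loads[user] -= 1
        return helper
    return None
```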

Session Chair

Fangxin Wang, Chinese University of Hong Kong (Shenzhen), China

Session 11

Reinforcement Learning

Conference: 12:30 PM — 1:40 PM JST
Local: Jun 26 Sat, 11:30 PM — 12:40 AM EDT

DATE: Disturbance-Aware Traffic Engineering with Reinforcement Learning in Software-Defined Networks

Minghao Ye (New York University, USA); Junjie Zhang (Fortinet, Inc., USA); Zehua Guo (Beijing Institute of Technology, China); H. Jonathan Chao (NYU Tandon School of Engineering, USA)

Traffic Engineering (TE) has been applied to optimize network performance by routing/rerouting flows based on traffic loads and network topologies. To cope with network dynamics from emerging applications, it is essential to reroute flows more frequently than today's TE to maintain network performance. However, existing TE solutions may introduce considerable Quality of Service (QoS) degradation and service disruption since they do not take the potential negative impact of flow rerouting into account. In this paper, we apply a new QoS metric named network disturbance to gauge the impact of flow rerouting while optimizing network load balancing in backbone networks. To employ this metric in TE design, we propose a disturbance-aware TE called DATE, which uses Reinforcement Learning (RL) to intelligently select some critical flows between nodes for each traffic matrix and reroute them using Linear Programming (LP) to jointly optimize network performance and disturbance. DATE is equipped with a customized actor-critic architecture and Graph Neural Networks (GNNs) to handle dynamic traffic and single link failures. Extensive evaluations show that DATE can outperform state-of-the-art TE methods with close-to-optimal load balancing performance while effectively mitigating the 99th percentile network disturbance by up to 31.6%.

A Multi-Objective Reinforcement Learning Perspective on Internet Congestion Control

Zhenchang Xia (Wuhan University, China); Yanjiao Chen (Zhejiang University, China); Libing Wu, Yu-Cheng Chou, Zhicong Zheng and Haoyang Li (Wuhan University, China); Baochun Li (University of Toronto, Canada)

The advent of new network architectures has resulted in the rise of network applications with different performance requirements: live video streaming applications require low latency, whereas file transfer applications require high throughput. Existing congestion control protocols may fail to simultaneously meet the performance requirements of these different types of applications, since their objective function is fixed by design and difficult to readjust to application needs. In this paper, we develop MOCC (Multi-Objective Congestion Control), a novel multi-objective congestion control protocol that can meet the performance requirements of different applications without redesigning the objective function. MOCC leverages multi-objective reinforcement learning with preferences in order to adapt to different types of applications. By addressing challenges such as slow convergence and the difficulty of defining episode termination, MOCC quickly converges to the equilibrium point and adapts multi-objective reinforcement learning to congestion control. Through an extensive array of experiments, we find that MOCC outperforms the most recent state-of-the-art congestion control protocols and can achieve a trade-off between throughput, latency, and packet loss, meeting the performance requirements of different types of applications by setting preferences.
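
A hedged sketch of how a preference vector might scalarize the reward in a multi-objective congestion controller of this kind; the objective names, normalization constants, and example preferences are illustrative assumptions, not MOCC's actual values.

```python
import numpy as np

def reward(throughput_mbps, rtt_ms, loss_rate, preference):
    """preference: weights over (throughput, delay, loss), summing to 1."""
    objectives = np.array([
        throughput_mbps / 100.0,   # higher is better
        -rtt_ms / 100.0,           # lower is better
        -loss_rate * 10.0,         # lower is better
    ])
    return float(np.dot(preference, objectives))

# A latency-sensitive application weights delay heavily; a bulk-transfer
# application weights throughput heavily.
video_pref = np.array([0.2, 0.7, 0.1])
bulk_pref  = np.array([0.7, 0.1, 0.2])
print(reward(80.0, 40.0, 0.01, video_pref), reward(80.0, 40.0, 0.01, bulk_pref))
```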

Throughput Maximization for Wireless Powered Communication: Reinforcement Learning Approaches

Yanjun Li, Xiaofeng Su and Huatong Jiang (Zhejiang University of Technology, China); Chung Shue Chen (Nokia Bell Labs, France)

To maximize the throughput of wireless powered communication (WPC), it is critical for the device to decide when to harvest energy, when to transmit data, and what transmit power to use. In this paper, we consider a WPC system with a single device using a harvest-store-transmit protocol and aim to maximize the long-term average throughput with optimal allocation of the energy harvesting time, data transfer time, and the device's transmit power. Taking into account many practical constraints, including finite battery capacity, time-varying channels, and a non-linear energy harvesting model, we propose both deep Q-learning (DQL) and actor-critic (AC) approaches to solve the problem and obtain fully online policies. Simulation results show that the performance of our proposed AC approach comes close to that achieved by value iteration and is superior to DQL and other baseline algorithms. Meanwhile, its space complexity is 2-3 orders of magnitude less than that required by value iteration.
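
For illustration, the sketch below shows a tabular Q-learning update for the harvest-store-transmit decision. The paper uses deep Q-learning and actor-critic; the tabular form and the state/action discretization here are simplifying assumptions.

```python
import numpy as np

N_BATTERY, N_CHANNEL = 10, 4                      # assumed discretization
ACTIONS = ["harvest", "tx_low", "tx_high"]        # assumed action set
Q = np.zeros((N_BATTERY, N_CHANNEL, len(ACTIONS)))

def choose_action(state, epsilon=0.1):
    """Epsilon-greedy choice over the discretized (battery, channel) state."""
    b, h = state
    if np.random.rand() < epsilon:
        return np.random.randint(len(ACTIONS))
    return int(Q[b, h].argmax())

def q_update(state, action, reward, next_state, alpha=0.1, gamma=0.95):
    """One-step Q-learning update; reward would be the throughput in the slot."""
    b, h = state
    nb, nh = next_state
    target = reward + gamma * Q[nb, nh].max()
    Q[b, h, action] += alpha * (target - Q[b, h, action])
```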

Distributed and Adaptive Traffic Engineering with Deep Reinforcement Learning

Nan Geng, Mingwei Xu, Yuan Yang, Chenyi Liu, Jiahai Yang, Qi Li and Shize Zhang (Tsinghua University, China)

Many studies focus on distributed traffic engineering (TE), where routers make routing decisions independently. Existing approaches usually tackle distributed TE problems through traditional optimization methods. However, due to the intrinsic complexity of distributed TE problems, routing decisions cannot be obtained efficiently, which leads to significant performance degradation, especially under highly dynamic traffic. Emerging machine learning technologies such as deep reinforcement learning (DRL) provide a new, experience-driven way to address TE problems. In this paper, we propose DATE, a distributed and adaptive TE framework with DRL. DATE distributes well-trained agents to the routers in the network. Each agent makes local routing decisions independently based on link utilization ratios flooded periodically by each router. To coordinate the distributed agents toward global optimization under different traffic conditions, we construct candidate paths, carefully design the agents, and build a virtual environment to train them with a DRL algorithm. We conduct extensive simulations and experiments using real-world network topologies with both real and synthetic traffic traces. The results show that DATE outperforms existing approaches and yields near-optimal performance with superior robustness.

Session Chair

En Wang, Jilin University, China

Session 12

IoT & Data Processing

Conference: 1:50 PM — 3:00 PM JST
Local: Jun 27 Sun, 12:50 AM — 2:00 AM EDT

EXTRA: An Experience-driven Control Framework for Distributed Stream Data Processing with a Variable Number of Threads

Teng Li, Zhiyuan Xu, Jian Tang and Kun Wu (Syracuse University, USA); Yanzhi Wang (Northeastern University, USA)

In this paper, we present the design, implementation, and evaluation of a control framework, EXTRA (EXperience-driven conTRol frAmework), for scheduling in general-purpose Distributed Stream Data Processing Systems (DSDPSs). Our design is novel for the following reasons. First, EXTRA enables a DSDPS to dynamically change the number of threads on the fly according to system states and demands. Most existing methods, however, use a fixed number of threads to carry the workload (for each processing unit of an application), which is specified by a user in advance and does not change during runtime. Our design thus introduces a whole new dimension for control in DSDPSs, which has great potential to significantly improve system flexibility and efficiency, but makes the scheduling problem much harder. Second, EXTRA leverages an experience/data-driven, model-free approach for dynamic control using emerging Deep Reinforcement Learning (DRL), which enables a DSDPS to learn the best way to control itself from its own experience, just as a human learns a skill (such as driving or swimming) without any accurate and mathematically solvable model. We implemented EXTRA on a widely-used DSDPS, Apache Storm, and evaluated its performance with three representative Stream Data Processing (SDP) applications: continuous queries, word count (stream version), and log stream processing. In particular, we performed experiments under realistic settings (where multiple application instances are mixed together), rather than the simplified setting (experiments on a single application instance) used in most related works. Extensive experimental results show that: 1) compared to Storm's default scheduler and the state-of-the-art model-based method, EXTRA reduces average end-to-end tuple processing time by 39.6% and 21.6%, respectively; 2) EXTRA leads to more flexible and efficient stream data processing by enabling the use of a variable number of threads; and 3) EXTRA is robust in a highly dynamic environment with significant workload changes.

Isolayer: The Case for an IoT Protocol Isolation Layer

Jiamei Lv, Gonglong Chen and Wei Dong (Zhejiang University, China)

The Internet of Things (IoT), which connects a large number of devices with wireless connectivity, has come into the spotlight, and various wireless radio technologies and application protocols have been proposed. Due to scarce channel resources, different network traffic may interact in negative ways. This paper argues that there should be an isolation layer in IoT network communication stacks that makes each traffic's perception of the wireless channel independent of what other traffic is running. We present Isolayer, an isolation layer design providing fine-grained and flexible channel isolation services in heterogeneous IoT networks. Through a shared collision avoidance module, Isolayer can provide effective isolation even between different wireless technologies (e.g., BLE and 802.15.4). Isolayer provides four levels of isolation services for users, i.e., protocol level, packet-type level, and source- and destination-address levels. Considering the varied isolation requirements in practice, we design a domain-specific language for users to specify the key logic of their requirements. Taking this code as input, Isolayer generates the control packets automatically and lets the nodes that receive the control packets update their isolation services correspondingly. We implement Isolayer on realistic IoT nodes, i.e., TI CC2650 and Heltec LoRa node 151, and perform extensive evaluations. The results show that: (1) Isolayer incurs acceptable overhead in terms of delay and memory usage; (2) Isolayer provides effective isolation service in heterogeneous IoT networks; and (3) Isolayer achieves about an 18.6% reduction in the end-to-end delay of isolated packets in an IoT network with heavy traffic load.

No Wait, No Waste: A Novel and Efficient Coordination Algorithm for Multiple Readers in RFID Systems

Qiuying Yang and Xuan Liu (Hunan University, China); Song Guo (The Hong Kong Polytechnic University, Hong Kong)

How to efficiently coordinate multiple readers to work together is critical for high throughput in RFID systems. Existing research focuses on designing efficient reader scheduling strategies that arrange adjacent readers to work at different times to avoid signal collisions. However, the impact of readers' unbalanced tag loads on read throughput remains a significant challenge. In RFID systems, the distribution of tags is usually variable and uneven, which makes the number of tags covered by each reader (i.e., its load) imbalanced. This imbalance leads to different execution times for readers: heavily loaded readers take longer to collect all their tags, while readers that finish earlier are left idle waiting. To avoid this useless waiting and improve system throughput, this paper focuses on the load balancing problem of multiple readers. It is an NP-hard problem, for which we design heuristic algorithms that adjust readers' interrogation regions according to designed strategies to efficiently balance their loads. A key advantage of our algorithm is that it can be adopted by almost all existing protocols in multi-reader systems, including reader scheduling protocols, to improve system throughput. Extensive experiments demonstrate that our algorithm can significantly improve throughput in various scenarios.
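
A hedged sketch of one greedy load-balancing pass: repeatedly take the most loaded reader and hand off a tag that is also covered by a lighter reader (i.e., lies in their overlap region). The overlap model and stopping rule are illustrative assumptions, not the paper's exact heuristic.

```python
def balance_loads(assignment, coverage, rounds=100):
    """assignment: {tag: reader}; coverage: {tag: set of readers that can read it}."""
    for _ in range(rounds):
        loads = {}
        for tag, r in assignment.items():
            loads[r] = loads.get(r, 0) + 1
        heavy = max(loads, key=loads.get)
        moved = False
        for tag, r in assignment.items():
            if r != heavy:
                continue
            # lighter readers that also cover this tag (overlap region)
            alts = [x for x in coverage[tag] if loads.get(x, 0) + 1 < loads[heavy]]
            if alts:
                assignment[tag] = min(alts, key=lambda x: loads.get(x, 0))
                moved = True
                break
        if not moved:          # no beneficial move left: loads are balanced
            break
    return assignment
```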

Snapshot for IoT: Adaptive Measurement for Multidimensional QoS Resource

Yuyu Zhao, Guang Cheng, Chunxiang Liu and Zihan Chen (Southeast University, China)

With the increasing and extensive use of intelligent Internet of Things (IoT) devices, their operational status in the network has become important data on which QoS management and scheduling depend. For resilient intelligent IoT clusters with flexibly added and removed devices and heterogeneous operating systems, this paper proposes an adaptive measurement method, MRAM, which can snapshot the multidimensional QoS resource view (MRV) of the IoT devices in a cluster. MRAM uses a measurement offloading architecture based on an extensible gateway platform and cloud computing to free up the local resources of monitored IoT devices. Based on an improved LSTM algorithm, we design ELSTM, a mutation detection method for the MRV: ELSTM judges whether newly collected QoS resource data contain mutations and, if so, triggers the adaptive measurement state machine. According to this state machine, which ensures that the MRV is updated in time to reflect the current status of the cluster, MRAM adjusts the measurement granularity in real time. This method provides a timely global profile for upper-layer QoS services and reduces the impact of measurement on the IoT devices. We build a real environment to test the performance of the method. MRAM achieves high measurement accuracy, with a mutation detection precision of 98.29%, and converges MRV updates at the second level while keeping the storage and CPU consumption of IoT devices low.
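
The sketch below illustrates the adaptive-measurement idea: predict the next QoS sample, flag a mutation when the observation deviates too much, and tighten the measurement interval after a mutation. MRAM uses an improved LSTM (ELSTM); the EWMA predictor and the thresholds here are stand-in assumptions.

```python
def adaptive_measurement(samples, threshold=0.3,
                         fast_interval=1.0, slow_interval=10.0):
    """Return (sample, mutation_flag, next_interval) for each observation."""
    ewma, schedule = None, []
    for x in samples:
        if ewma is None:
            ewma = x
        # mutation: observation deviates from prediction by more than `threshold`
        mutated = abs(x - ewma) > threshold * max(abs(ewma), 1e-9)
        interval = fast_interval if mutated else slow_interval
        schedule.append((x, mutated, interval))
        ewma = 0.8 * ewma + 0.2 * x      # stand-in predictor (EWMA, not ELSTM)
    return schedule

print(adaptive_measurement([10, 10.2, 10.1, 17.5, 17.3, 10.4]))
```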

Session Chair

Anurag Kumar, Indian Institute of Science, India

Session 13

Performance

Conference: 3:10 PM — 4:20 PM JST
Local: Jun 27 Sun, 2:10 AM — 3:20 AM EDT

HierTopo: Towards High-Performance and Efficient Topology Optimization for Dynamic Networks

Jing Chen, Zili Meng, Yaning Guo and Mingwei Xu (Tsinghua University, China); Hongxin Hu (University at Buffalo, USA)

Dynamic networks make it possible to adapt the network topology to real-time traffic demands. However, due to the complexity of topology optimization, existing solutions suffer from a trade-off between performance and efficiency: they either have large optimality gaps or excessive optimization overhead. To break through this trade-off, our key observation is that we can offload the optimization procedure to every network node to handle the complexity. Thus, we propose HierTopo, a hierarchical topology optimization method for dynamic networks that achieves both high performance and efficiency. HierTopo first runs a local policy on each network node to aggregate network information into low-dimensional features, and then uses these features to make global topology decisions. Evaluation on real-world network traces shows that HierTopo outperforms state-of-the-art solutions by 11.52-38.91% with only milliseconds of decision latency, and is also superior in generalization ability.

wCompound: Enhancing Performance of Multipath Transmission in High-speed and Long Distance Networks

Rui Zhuang, Yitao Xing, Wenjia Wei, Yuan Zhang, Jiayu Yang and Kaiping Xue (University of Science and Technology of China, China)

As user demand for data transmission over high-speed, long-distance networks increases significantly, multipath TCP (MPTCP) shows great potential to improve the utilization of such network resources beyond traditional TCP and to provide better quality of service (QoS). It has been reported that TCP wastes substantial bandwidth in high-speed, long-distance networks, while MPTCP allows the simultaneous use of multiple network paths between two distant hosts and thus provides better resource utilization, higher throughput, and smoother failure recovery for applications. However, existing multipath congestion control algorithms cannot fully meet the efficiency requirements of high-speed, long-distance networks: they mainly emphasize fairness rather than other critical QoS indicators such as throughput, yet still encounter fairness issues when coexisting with various TCP variants. To solve these problems, we develop weighted Compound (wCompound), a compound multipath congestion control algorithm combining loss-based and delay-based components, which originates from Compound TCP and is applicable to high-speed, long-distance networks. Unlike the traditional approach of setting an empirical value as the threshold, wCompound adopts a dynamic threshold to adaptively adjust the transmission rate of each subflow based on the current network state, so as to effectively couple all subflows and fully utilize the network capacity. Moreover, through the cooperation of its delay-based and loss-based components, wCompound also ensures good fairness to different types of TCP variants. We implement wCompound in the Linux kernel and conduct extensive experiments on our testbed. The results show that wCompound achieves higher utilization of network resources and always maintains an appropriate throughput, whether competing with loss-based or delay-based network traffic.
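
A per-subflow sketch in the spirit of Compound-style control: the send window is the sum of a loss-based component (cwnd) and a delay-based component (dwnd), and the delay component backs off when the estimated queue backlog exceeds a threshold. wCompound's actual cross-subflow coupling and dynamic threshold rule are not specified in the abstract, so the update below is an illustrative assumption.

```python
def compound_update(sub, rtt_sample, loss):
    """sub: per-subflow state dict with cwnd, dwnd, base_rtt (seconds)."""
    sub["base_rtt"] = min(sub["base_rtt"], rtt_sample)
    # estimated packets this subflow keeps queued in the bottleneck
    backlog = (sub["cwnd"] + sub["dwnd"]) * (1.0 - sub["base_rtt"] / rtt_sample)
    # dynamic threshold: a fraction of the window rather than a fixed constant
    gamma = max(3.0, 0.1 * (sub["cwnd"] + sub["dwnd"]))
    if loss:
        sub["cwnd"] = max(sub["cwnd"] / 2.0, 2.0)
        sub["dwnd"] = 0.0
    elif backlog < gamma:
        sub["dwnd"] += 1.0                                  # path underused: grow fast
        sub["cwnd"] += 1.0 / (sub["cwnd"] + sub["dwnd"])    # loss part grows like Reno
    else:
        sub["dwnd"] = max(sub["dwnd"] - backlog + gamma, 0.0)  # drain the queue
        sub["cwnd"] += 1.0 / (sub["cwnd"] + sub["dwnd"])
    return sub

subflow = {"cwnd": 10.0, "dwnd": 0.0, "base_rtt": 0.04}
subflow = compound_update(subflow, rtt_sample=0.05, loss=False)
```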

Demystifying the Relationship Between Network Latency and Mobility on High-Speed Rails: Measurement and Prediction

Xiangxiang Wang and Jiangchuan Liu (Simon Fraser University, Canada); Fangxin Wang (The Chinese University of Hong Kong, Shenzhen, China); Ke Xu (Tsinghua University, China)

Recent years have seen increasing attention on building High-Speed Railways (HSR) in many countries. Trains running on these railways reach top speeds of over 300 km/h, which makes HSR a scenario with unstable connection quality. In this paper, we propose a novel model that can accurately estimate the mobility status on HSR based on the changing patterns of network latency. Although various factors make the prediction complex, we argue that recent advances in deep learning apply well in our context, and we design a neural network model that estimates the moving velocity by monitoring the changing patterns of network latency over a short period. In this model, we use a new variable called Round Difference Time (RDT) to describe latency's changing patterns. We also use the Fourier transform to extract the hidden time-frequency information and use the generated spectrum for estimation. Our data-driven evaluations show that, with suitable parameters, the model achieves an accuracy of up to 94% on all three lines.
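
A small sketch of the feature pipeline described above: derive a Round Difference Time series from consecutive latency samples and take its Fourier spectrum as model input. The abstract does not define RDT precisely, so differencing consecutive RTTs is an assumption, as are the window length and units.

```python
import numpy as np

def rdt_spectrum(rtts_ms, window=64):
    """Turn a sequence of RTT samples (ms) into a magnitude spectrum feature."""
    rdt = np.diff(np.asarray(rtts_ms, dtype=float))     # change between rounds
    if len(rdt) >= window:
        rdt = rdt[-window:]
    else:
        rdt = np.pad(rdt, (window - len(rdt), 0))        # zero-pad short traces
    spectrum = np.abs(np.fft.rfft(rdt - rdt.mean()))
    return spectrum                                      # fed to the velocity estimator

print(rdt_spectrum([40, 42, 39, 55, 41, 60, 43, 44] * 8).shape)
```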

Time-expanded Method Improving Throughput in Dynamic Renewable Networks

Jianhui Zhang, Siqi Guan and Jiacheng Wang (Hangzhou Dianzi University, China); Liming Liu (Hangzhou Dianzi University, China); Hanxiang Wang (Hangzhou Dianzi University, China); Feng Xia (Federation University Australia, Australia)

In Dynamic Rechargeable Networks (DRNs), existing studies usually consider the spatio-temporal dynamics of the harvested energy so as to maximize throughput through efficient energy allocation. However, other network dynamics, including time-varying link quality, communication power, and battery charging efficiency, have seldom been considered simultaneously. Beyond these dynamics, wireless interference brings an extra challenge. To account for these dynamics together, this paper studies the challenging problem of network throughput maximization in DRNs via proper energy allocation while considering the additional effect of wireless interference. We introduce the Time-Expanded Graph (TEG) to describe the above dynamics in a tractable way, and first examine the scenario with a single source-target pair. To maximize throughput, this paper designs the Single Pair Throughput maximization (SPT) algorithm based on the TEG while accounting for wireless interference. In the case of multiple source-target pairs, solving the network throughput maximization problem directly is quite complex. This paper therefore adopts the Garg and Könemann framework and designs the Multiple Pairs Throughput (MPT) algorithm to maximize the overall throughput of all pairs. MPT is a fast approximation with a ratio of 1-3ϵ, where 0 < ϵ < 1 is a small positive constant. This paper conducts extensive numerical evaluations based on both simulated data and data collected by a real energy harvesting system. The numerical results demonstrate the throughput improvement of our algorithms.
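
A minimal sketch of building a time-expanded graph: each node is replicated per time slot, storage edges carry battery energy across slots, and transmission edges follow each slot's link rate. The capacity model is an illustrative assumption and omits the interference constraints handled by SPT/MPT.

```python
import itertools

def build_teg(nodes, links, T, battery_cap, link_rate):
    """links: iterable of (u, v); link_rate[(u, v)][t] gives the slot-t capacity."""
    edges = {}   # (src_vertex, dst_vertex) over time-expanded vertices -> capacity
    for u, t in itertools.product(nodes, range(T - 1)):
        # energy stored in the battery flows from (u, t) to (u, t+1)
        edges[((u, t), (u, t + 1))] = battery_cap[u]
    for (u, v), t in itertools.product(links, range(T)):
        # data transmitted in slot t, limited by the link quality in that slot
        edges[((u, t), (v, t))] = link_rate[(u, v)][t]
    return edges
```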

Session Chair

Yifei Zhu, Shanghai Jiao Tong University, China

Session 14

Systems

Conference: 4:30 PM — 5:40 PM JST
Local: Jun 27 Sun, 3:30 AM — 4:40 AM EDT

Eunomia: Efficiently Eliminating Abnormal Results in Distributed Stream Join Systems

Jie Yuan (Huazhong University of Science and Technology, China); Yonghui Wang (Huazhong University of Science and Technology, China); Hanhua Chen, Hai Jin and Haikun Liu (Huazhong University of Science and Technology, China)

With the emergence of big data applications, stream join systems are widely used to extract valuable information from multi-source streams. However, providing completeness of processing results in a large-scale distributed stream join system is challenging, because it is hard to guarantee consistency among all instances, especially in a distributed environment. Abnormal results can make the quality of the obtained data unacceptable in practice.

In this paper, we propose Eunomia, a novel distributed stream join system that leverages an ordered propagation model to efficiently eliminate abnormal results. We design a lightweight self-adaptive strategy to adjust the structure of the model according to the dynamic stream input rate and workload, which significantly improves scalability and performance. We implement Eunomia and conduct comprehensive experiments to evaluate its performance. Experimental results show that Eunomia eliminates abnormal results to guarantee completeness, improves system throughput by 25%, and reduces processing latency by 74% compared to state-of-the-art designs.

Exploiting Outlier Value Effects in Sparse Urban CrowdSensing

En Wang, Mijia Zhang, Yongjian Yang and Yuanbo Xu (Jilin University, China); Jie Wu (Temple University, USA)

Sparse spatiotemporal data completion is crucial for Mobile CrowdSensing in urban application scenarios. Accurate urban data completion can enhance data expression, improve urban analysis, and ultimately guide city planning. However, it is a non-trivial task to account for outlier values caused by special events (e.g., parking peaks, traffic congestion, or festival parades) in spatiotemporal data completion, because of the following challenges: 1) their rarity and unpredictability, 2) their inconsistency with normal values, and 3) their complex spatiotemporal relations. Despite considerable improvements, recent deep learning-based methods overlook the existence of outlier values, which results in misidentifying them. To this end, focusing on spatiotemporal data, we propose a matrix completion method that takes outlier value effects into consideration. Specifically, we build an outlier value model by adding a memory network to traditional matrix completion and modifying the loss function. Along this line, we extract the features of outlier values and efficiently complete and predict the unsensed data. Finally, we conduct both qualitative and quantitative experiments on three different datasets, and the results demonstrate that our method outperforms the state-of-the-art baselines.

PQR: Prediction-supported Quality-aware Routing for Uninterrupted Vehicle Communication

Wenquan Xu, Xuefeng Ji and Chuwen Zhang (Tsinghua University, China); Beichuan Zhang (University of Arizona, USA); Yu Wang (Temple University, USA); Xiaojun Wang (Dublin City University, Ireland); Yunsheng Wang (Kettering University, USA); Jianping Wang (City University of Hong Kong, Hong Kong); Bin Liu (Tsinghua University, China)

Vehicle-to-Vehicle (V2V) communication opens a new way for vehicles to communicate directly with each other, providing faster responses for time-sensitive tasks than cellular networks. Effective V2V routing protocols are essential yet challenging, as the highly dynamic road environment makes communication prone to breaking. Many prediction methods proposed in existing protocols to address this issue are either flawed or of limited effect. In this paper, to cope with the two problems that cause communication interruptions, i.e., link breaks and route quality degradation, we devise an acceleration-based trajectory prediction algorithm to estimate link duration and a machine learning model to predict route quality. Based on these prediction algorithms, we propose PQR, a Prediction-supported Quality-aware Routing protocol, which can proactively switch to a better route before the current link breaks or the route quality degrades. In particular, considering the limitations of current routing protocols, we design a new hybrid routing protocol that integrates topology-based and location-based methods to achieve instant communication. Simulation results show that PQR outperforms existing protocols in Packet Delivery Ratio (PDR), Round-trip Time (RTT), and Normalized Routing Overhead (NRO). We have also implemented a vehicular testbed to demonstrate PQR's real-world performance; results show that PQR achieves almost no packet loss, with latency of less than 10 ms during route handoff after topology changes.
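
A simple sketch of acceleration-based link-duration estimation: extrapolate the relative motion of two vehicles under constant acceleration and find how long their separation stays within communication range. The constant-acceleration model, range, and sampling step are illustrative assumptions, not PQR's actual parameters.

```python
import numpy as np

def link_duration(p_rel, v_rel, a_rel, comm_range=250.0, horizon=60.0, dt=0.1):
    """p_rel, v_rel, a_rel: 2-D relative position (m), velocity (m/s), acceleration (m/s^2)."""
    p_rel, v_rel, a_rel = map(np.asarray, (p_rel, v_rel, a_rel))
    t = 0.0
    while t <= horizon:
        pos = p_rel + v_rel * t + 0.5 * a_rel * t * t   # constant-acceleration motion
        if np.linalg.norm(pos) > comm_range:
            return t          # estimated remaining link lifetime (seconds)
        t += dt
    return horizon            # link expected to persist beyond the horizon

print(link_duration([50.0, 0.0], [15.0, 0.0], [0.5, 0.0]))
```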

ChirpMu: Chirp Based Imperceptible Information Broadcasting with Music

Yu Wang, Xiaojun Zhu and Hao Han (Nanjing University of Aeronautics and Astronautics, China)

This paper presents ChirpMu, a system that encodes information into chirp symbols and embeds the symbols in music. Users enjoy the music without noticing the chirp sounds, while their smartphones can decode the information. ChirpMu can be used to broadcast information such as Wi-Fi credentials or coupons in shopping malls. It features a novel chirp symbol design that can combat sound attenuation and environmental noise. In addition, ChirpMu properly adjusts the proportion of chirp symbols mixed with the music, so that the chirp symbols cannot be heard by users but can be decoded by smartphones with a low error rate. Real-world experiments show that ChirpMu achieves a low bit error rate.
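
A minimal sketch of embedding a chirp symbol into music: synthesize a linear up-chirp and add it to the music at a small relative amplitude. The carrier band, symbol length, and mixing ratio below are illustrative assumptions, not ChirpMu's actual parameters.

```python
import numpy as np

def chirp_symbol(f0, f1, duration, fs=44100):
    """Linear up-chirp sweeping from f0 to f1 Hz over `duration` seconds."""
    t = np.arange(int(duration * fs)) / fs
    k = (f1 - f0) / duration                      # sweep rate (Hz/s)
    return np.sin(2 * np.pi * (f0 * t + 0.5 * k * t * t))

def embed(music, symbol, offset, mix=0.05):
    """Mix `symbol` into `music` at sample `offset` with a small relative amplitude."""
    out = music.astype(float).copy()
    end = offset + len(symbol)
    out[offset:end] += mix * np.max(np.abs(music)) * symbol[:len(out[offset:end])]
    return np.clip(out, -1.0, 1.0)

# Placeholder "music": a 1-second 440 Hz tone; a near-ultrasonic chirp is embedded.
music = 0.5 * np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)
mixed = embed(music, chirp_symbol(17000, 19000, 0.05), offset=1000)
```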

Session Chair

Jian Li, University of Science and Technology of China, China
